
    The Isotonic Regression Framework

    Categorizing continuous variables is an important task in statistical analysis, especially when analyzing dose-response relationships. Creating meaningful groups of the predictor variables with respect to the outcome variable is desirable in many settings, especially if the form of the relationship is unknown. However, it is not always obvious how many groups should be built and where the cutpoints should be placed. Usually more than one explanatory variable has to be included in the analysis, so an appropriate statistical model is required; ideally, a simple approach that models the data without many requirements. Another important issue in statistical analysis, and especially in toxicology studies, is establishing a dose-response relationship: increasing response probability with increasing predictor variable. This thesis deals with cases where categorization of numerical or categorical predictor variables arises as a consequence of the dose-response relationship. Isotonic regression is an alternative proposal when one wishes to establish a dose-response relationship, categorize continuous variables and estimate threshold values. The only assumption of this approach is monotonicity in the response variable. Isotonic regression summarizes n observations into l categories (level sets or solution blocks) by automatically splitting the predictor into constant-risk groups. The result is always a step function, so isotonic regression can be used to fit a changepoint model. The Pooled Adjacent Violators Algorithm (PAVA) is used to fit the data. Some problems arise in model fitting and testing when the response is binary; the present work highlights these difficulties and proposes solutions.
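The pooling step of PAVA mentioned above can be sketched in a few lines of Python. This is an illustrative re-implementation for equally weighted responses, not code from the thesis itself:

```python
# Minimal sketch of the Pooled Adjacent Violators Algorithm (PAVA),
# assuming equal weights per observation; names are illustrative.

def pava(y):
    """Fit a non-decreasing step function to the responses y by
    repeatedly pooling adjacent level sets that violate monotonicity."""
    # Each block holds (sum of responses, number of observations).
    blocks = [(v, 1) for v in y]
    i = 0
    while i < len(blocks) - 1:
        s1, n1 = blocks[i]
        s2, n2 = blocks[i + 1]
        if s1 / n1 > s2 / n2:           # violation: pool the two blocks
            blocks[i:i + 2] = [(s1 + s2, n1 + n2)]
            i = max(i - 1, 0)           # pooling can create a new violation upstream
        else:
            i += 1
    # Expand the blocks back to one fitted value per observation.
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit

print(pava([1, 0, 1, 0, 0, 1, 1]))  # → [0.4, 0.4, 0.4, 0.4, 0.4, 1.0, 1.0]
```

The blocks remaining at the end are exactly the constant-risk level sets described above, and the fitted values form the monotonic step function.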
Regarding isotonic regression with a binary response, the isotonic test for trend, the reduced isotonic model, multidimensional isotonic models and methods to assess threshold limit values are discussed. The isotonic framework provides a reliable test for trend which, unlike other widely used tests (the Cochran-Armitage test, for example), is independent of any monotonic transformation of the dose variable and does not assume a linear shape. However, the proposed large-sample approximation (a weighted chi-square distribution) does not hold when the overall response probability is less than 5%, so exact methods are proposed to assess the correct p-value. A simulation study showed that the isotonic likelihood ratio test is more powerful than the Cochran-Armitage test, the Wilcoxon test and the Iso-chi-squared test. The model resulting from PAVA can be made more parsimonious by eliminating the level sets that correspond to a non-significant change in the response variable. This model is called reduced isotonic regression. It can be obtained in two ways: a sequence of Fisher exact tests for the adjacent 2x2 tables, or a variation of a "closed testing" procedure. In the first method, the correction for multiple comparisons is made by an a-priori estimation of the overall significance level in a permutation procedure; in the second, the type I error is controlled by the closure principle. To select between the full isotonic and the reduced model, a procedure based on parametric bootstrap is proposed. A simulation study showed that when the maximal coefficient of determination for the analyzed data set is at least 50% and the data can be represented by a step function, the reduced isotonic regression successfully controls the trade-off between model complexity and goodness of fit. When more than one predictor is to be taken into account, an additive isotonic model can be applied.
Alternatively, an isotonic-surfaces model is proposed. It can be estimated by an iterative version of the Pooled Adjacent Violators Algorithm, and the result is a sequence of surfaces that is monotonic in every dimension. This approach models interaction and categorizes the predictors into "multivariate" groups by combining them with respect to restrictions on the outcome variable. Unlike the additive model, it can easily be combined with the reducing procedures to give a simple and interpretable model; for practical reasons, however, at most three predictors can be taken into account. A special aspect of analyzing dose-response relationships for a compound known to have harmful health effects is estimating a threshold limit value (TLV). In this regard a "hockey stick" threshold model is usually used. As an alternative, a step function model obtained by fitting the data with isotonic regression is proposed. This returns a set of candidate threshold values, and several threshold value estimation procedures are studied here. The first starts from the isotonic model and applies the likelihood ratio test to detect the threshold value (method 1). Method 2 is based on the reduced isotonic regression. The performance of these two approaches is assessed in a simulation study under different scenarios, and their properties are explored with categorical predictors. It is concluded that these methods have satisfactory power to reject the constant-risk assumption when a dose-response relationship exists, as well as to estimate the actual threshold. Some limitations regarding the sample size and the strength of the trend are also discussed. A third method is also presented: it modifies the closed testing procedure for the special case of thresholds by setting one end of the regression line conditional on the other.
All three threshold value estimation methods can be combined with the isotonic-surfaces model to provide thresholds that take into account interactions between the predictor variables. The use of isotonic regression and its reduced version can also be extended to other settings. The flexibility of isotonic regression is illustrated by showing how it can model and test time-varying effects in Cox regression. A monotonic variation in the impact of a predictor over the observation period can be represented by a step function. An estimation of the time-dependent effect in the extended Cox model is presented, based on the isotonic regression framework. Smoothing the Schoenfeld residuals plotted against time with PAVA can reveal the changepoints without any a-priori information about their location; the corresponding step function is then introduced into the model. The power of the Grambsch-Therneau test (which tests for time variation in the effect of the predictors) can be improved if the isotonic transformation of the Schoenfeld residuals is used. Although this test appears to inflate the type I error, its power is higher than that of the conventional Grambsch-Therneau test and of tests based on fractional polynomials. In summary, the isotonic framework is characterized by simplicity and stability. Its main drawback is the lack of asymptotic support in testing, which can make the use of isotonic models cumbersome, since exact or bootstrap methods are needed.

    Studies of prevalence: how a basic epidemiology concept has gained recognition in the COVID-19 pandemic.

    BACKGROUND Prevalence measures the occurrence of any health condition, exposure or other factor related to health. The experience of COVID-19, a new disease caused by SARS-CoV-2, has highlighted the importance of prevalence studies, for which issues of reporting and methodology have traditionally been neglected. OBJECTIVE This communication highlights key issues about risks of bias in the design, conduct and reporting of prevalence studies, using examples about SARS-CoV-2 and COVID-19. SUMMARY The two main domains of bias in prevalence studies are those related to the study population (selection bias) and the condition or risk factor being assessed (information bias). Sources of selection bias should be considered both at the time of the invitation to take part in a study and when assessing who participates and provides valid data (respondents and non-respondents). Information bias appears when there are systematic errors affecting the accuracy and reproducibility of the measurement of the condition or risk factor. Types of information bias include misclassification, observer and recall bias. When reporting prevalence studies, clear descriptions of the target population, study population, study setting and context, and clear definitions of the condition or risk factor and its measurement are essential. Without clear reporting, the risks of bias cannot be assessed properly. Biased findings from prevalence studies can, however, affect decision-making and, in turn, the spread of disease. The concepts discussed here can be applied to the assessment of prevalence for many other conditions. CONCLUSIONS Efforts to strengthen methodological research and improve assessment of the risk of bias and the quality of reporting of studies of prevalence in all fields of research should continue beyond this pandemic.
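The misclassification bias discussed above can be quantified with the standard Rogan-Gladen correction, which recovers the true prevalence from the apparent (test-positive) prevalence given the test's sensitivity and specificity. A minimal sketch with purely illustrative numbers, not data from this article:

```python
def rogan_gladen(apparent, sensitivity, specificity):
    """Correct an apparent prevalence for misclassification by an
    imperfect diagnostic test (Rogan-Gladen estimator)."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Illustrative: 8% of samples test positive with a test that is
# 90% sensitive and 98% specific.
print(rogan_gladen(0.08, 0.90, 0.98))  # ≈ 0.068, i.e. lower than the raw 8%
```

With a highly specific test, even a few false positives inflate the apparent prevalence when the condition is rare, which is why the corrected estimate here falls below the raw proportion.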

    What is a multiple treatments meta-analysis?

    Standard meta-analyses are an effective tool in evidence-based medicine, but one of their main drawbacks is that they can compare only two alternative treatments at a time. Moreover, if no trials exist that directly compare two interventions, it is not possible to estimate their relative efficacy. Multiple treatments meta-analyses use a meta-analytical technique that allows the incorporation of evidence from both direct and indirect comparisons from a network of trials of different interventions, to estimate summary treatment effects as comprehensively and precisely as possible.
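The indirect-comparison idea described above can be illustrated with the standard Bucher method (a general technique, not specific to this article): on the log scale, an estimate of A versus C is obtained by subtracting the trial estimate of C versus B from that of A versus B, and the variances add. A sketch with made-up numbers:

```python
from math import sqrt

def bucher_indirect(d_ab, se_ab, d_cb, se_cb):
    """Indirect estimate of A vs C from trials of A vs B and of C vs B
    (Bucher method): effects subtract, variances add."""
    d_ac = d_ab - d_cb
    se_ac = sqrt(se_ab**2 + se_cb**2)
    ci = (d_ac - 1.96 * se_ac, d_ac + 1.96 * se_ac)
    return d_ac, se_ac, ci

# Hypothetical log-odds-ratios: A vs B = -0.50 (SE 0.15), C vs B = -0.20 (SE 0.20)
d, se, ci = bucher_indirect(-0.50, 0.15, -0.20, 0.20)
print(d, se, ci)  # indirect A vs C estimate with a wider CI than either input
```

The widening of the standard error (0.25 here, versus 0.15 and 0.20 for the direct comparisons) shows why indirect evidence is less precise, and why a network that also contains direct trials can sharpen the summary estimate.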

    Estimating the sample size of sham-controlled randomized controlled trials using existing evidence [version 2; peer review: 2 approved].

    Background: In randomized controlled trials (RCTs), the power is often 'reverse engineered' based on the number of participants that can realistically be achieved. An attractive alternative is planning a new trial conditional on the available evidence; a design of particular interest in RCTs that use a sham control arm (sham-RCTs). Methods: We explore the design of sham-RCTs, the role of sequential meta-analysis and conditional planning in a systematic review of renal sympathetic denervation for patients with arterial hypertension. The main efficacy endpoint was mean change in 24-hour systolic blood pressure. We performed sequential meta-analysis to identify the time point where the null hypothesis would be rejected in a prospective scenario. Evidence-based conditional sample size calculations were performed based on fixed-effect meta-analysis. Results: In total, six sham-RCTs (981 participants) were identified. The first RCT was considerably larger (535 participants) than those subsequently published (median sample size of 80). All trial sample sizes were calculated assuming an unrealistically large intervention effect, which resulted in low power when each study is considered as a stand-alone experiment. Sequential meta-analysis provided firm evidence against the null hypothesis with the synthesis of the first four trials (755 patients, cumulative mean difference -2.75 (95% CI -4.93 to -0.58) favoring the active intervention). Conditional planning resulted in much larger sample sizes than those in the original trials, due to overoptimistic expected effects assumed by the investigators in individual trials, and potentially a time-effect association. Conclusions: Sequential meta-analysis of sham-RCTs can reach conclusive findings earlier and hence avoid exposing patients to sham-related risks.
Conditional planning of new sham-RCTs poses important challenges: many surgical/minimally invasive procedures improve over time, so the intervention effect is expected to increase in new studies, and this violates the underlying assumptions. Unless this is accounted for, conditional planning will not improve the design of sham-RCTs.
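The fixed-effect meta-analysis underlying the conditional sample size calculations is standard inverse-variance pooling. A minimal sketch with illustrative blood-pressure effect sizes (not the actual trial data):

```python
from math import sqrt

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / se**2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = sqrt(1 / sum(weights))
    return pooled, se_pooled

# Hypothetical mean differences in 24-hour systolic BP (mmHg) and their SEs
effects = [-2.5, -4.0, -1.0, -3.5]
ses = [1.0, 2.0, 1.5, 2.5]
pooled, se = fixed_effect_meta(effects, ses)
print(f"pooled MD {pooled:.2f} mmHg "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```

In a sequential setting, the same pooling is recomputed after each new trial, and monitoring boundaries decide when the cumulative evidence is conclusive.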

    Immunomodulators and immunosuppressants for relapsing-remitting multiple sclerosis: a network meta-analysis

    Different therapeutic strategies are available for the treatment of people with relapsing-remitting multiple sclerosis (RRMS), including immunomodulators, immunosuppressants and biologics. Although there is consensus that these therapies reduce the frequency of relapses, their relative benefit in delaying new relapses or disability worsening remains unclear due to the limited number of direct comparison trials.

    Family-Based versus Unrelated Case-Control Designs for Genetic Associations

    The simplest and most commonly used approach for genetic associations is the case-control study design of unrelated people. This design is susceptible to population stratification. This problem is obviated in family-based studies, but it is usually difficult to accumulate large enough samples of well-characterized families. We addressed empirically whether the two designs give similar estimates of association in 93 investigations where both unrelated case-control and family-based designs had been employed. Estimated odds ratios differed beyond chance between the two designs in only four instances (4%). The summary relative odds ratio (ROR) (the ratio of odds ratios obtained from unrelated case-control and family-based studies) was close to unity (0.96 [95% confidence interval, 0.91–1.01]). There was no heterogeneity in the ROR across studies (amount of heterogeneity beyond chance I² = 0%). Differences on whether results were nominally statistically significant (p < 0.05) or not with the two designs were common (opposite classification rates 14% and 17%); this largely reflected differences in power. Conclusions were largely similar in diverse subgroup analyses. Unrelated case-control and family-based designs give overall similar estimates of association. We cannot rule out rare large biases or common small biases.
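For a single pair of studies, the relative odds ratio used above is simply the ratio of the two odds ratios, with its confidence interval formed on the log scale where the variances of the log-odds-ratios add. A sketch with hypothetical estimates, not values from the article:

```python
from math import exp, log, sqrt

def relative_odds_ratio(or1, se_log1, or2, se_log2):
    """Ratio of odds ratios from two designs, with a 95% CI built on
    the log scale (SEs are standard errors of the log odds ratios)."""
    log_ror = log(or1) - log(or2)
    se = sqrt(se_log1**2 + se_log2**2)
    return exp(log_ror), exp(log_ror - 1.96 * se), exp(log_ror + 1.96 * se)

# Hypothetical: OR 1.40 (log-SE 0.10) in a case-control study versus
# OR 1.50 (log-SE 0.12) in a family-based study of the same variant.
ror, lo, hi = relative_odds_ratio(1.40, 0.10, 1.50, 0.12)
print(ror, lo, hi)  # ROR near 1 with a CI spanning 1: no detectable difference
```

A summary ROR is then obtained by meta-analyzing such log-ratios across all variant-study pairs, which is how the pooled 0.96 in the abstract is constructed.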

    Estimating Patient-Specific Relative Benefit of Adding Biologics to Conventional Rheumatoid Arthritis Treatment: An Individual Participant Data Meta-Analysis.

    IMPORTANCE Current evidence remains ambiguous regarding whether biologics should be added to conventional treatment of rheumatoid arthritis for specific patients, which may cause potential overuse or treatment delay. OBJECTIVES To estimate the benefit of adding biologics to conventional antirheumatic drugs for the treatment of rheumatoid arthritis, given baseline characteristics. DATA SOURCES Cochrane CENTRAL, Scopus, MEDLINE, and the World Health Organization International Clinical Trials Registry Platform were searched for articles published from database inception to March 2, 2022. STUDY SELECTION Randomized clinical trials comparing certolizumab plus conventional antirheumatic drugs with placebo plus conventional drugs were selected. DATA EXTRACTION AND SYNTHESIS Individual participant data of the prespecified outcomes and covariates were acquired from the Vivli database. A 2-stage model was fitted to estimate patient-specific relative outcomes of adding certolizumab vs conventional drugs only. Stage 1 was a penalized logistic regression model to estimate the baseline expected probability of the outcome regardless of treatment using baseline characteristics. Stage 2 was a Bayesian individual participant data meta-regression model to estimate the relative outcomes for a particular baseline expected probability. Patient-specific results were displayed interactively in an application based on the 2-stage model. MAIN OUTCOMES AND MEASURES The primary outcome was low disease activity or remission at 3 months, defined by 3 disease activity indexes (ie, Disease Activity Score based on the evaluation of 28 joints, Clinical Disease Activity Index, or Simplified Disease Activity Index).
RESULTS Individual participant data were obtained from 3790 patients (2996 female [79.1%] and 794 male [20.9%]; mean [SD] age, 52.7 [12.3] years) from 5 large randomized clinical trials for moderate to high activity rheumatoid arthritis with usable data for 22 prespecified baseline covariates. Overall, adding certolizumab was associated with a higher probability of reaching low disease activity. The odds ratio for patients with an average baseline expected probability of the outcome was 6.31 (95% credible interval, 2.22-15.25). However, the benefits differed among patients with different baseline characteristics. For example, the estimated risk difference was smaller than 10% for patients with either low or high baseline expected probability. CONCLUSIONS AND RELEVANCE In this individual participant data meta-analysis, adding certolizumab was associated with greater effectiveness for rheumatoid arthritis in general. However, the benefit was uncertain for patients with low or high baseline expected probability, for whom other evaluations are necessary. The interactive application displaying individual estimates may help with treatment selection.
